
    Benchmarking Academic Anatomic Pathologists: The Association of Pathology Chairs Survey

    The most common benchmarks for faculty productivity are derived from Medical Group Management Association (MGMA) or Vizient-AAMC Faculty Practice Solutions Center® (FPSC) databases. The Association of Pathology Chairs has also collected similar survey data for several years. We examined the Association of Pathology Chairs annual faculty productivity data and compared them with MGMA and FPSC data to understand the value, inherent flaws, and limitations of benchmarking data. We hypothesized that the variability in calculated faculty productivity is due to the type of practice model and clinical effort allocation. Data from the Association of Pathology Chairs survey on 629 surgical pathologists and/or anatomic pathologists from 51 programs were analyzed. From review of service assignments, we were able to assign each pathologist to a specific practice model: general anatomic pathology/surgical pathology, 1 or more subspecialties, or a hybrid of the 2 models. There were statistically significant differences among academic ranks and practice types. When we analyzed our data using each organization’s methods, the median results for the anatomic pathology/surgical pathology general practice model were quite close to MGMA and FPSC results for anatomic and/or surgical pathology. Both MGMA and FPSC data exclude a significant proportion of academic pathologists with clinical duties. We used the more inclusive FPSC definition of clinical “full-time faculty” (0.60 clinical full-time equivalent and above). The correlation between clinical full-time equivalent effort allocation, annual days on service, and annual work relative value unit productivity was poor. This study demonstrates that effort allocations are variable across academic departments of pathology and do not correlate well with either work relative value unit effort or reported days on service. Although the Association of Pathology Chairs–reported median work relative value unit productivity approximated MGMA and FPSC benchmark data, we conclude that more rigorous standardization of academic faculty effort assignment will be needed to improve the value of work relative value unit measurements of faculty productivity.
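The correlation check described in this abstract can be sketched as follows. This is a hypothetical illustration only: the faculty figures, the loose cFTE–wRVU relationship, and the helper names are invented; the only elements taken from the study are the Pearson-style correlation and the FPSC ≥0.60 clinical FTE inclusion rule.

```python
# Hypothetical sketch: does clinical FTE allocation predict annual wRVU
# productivity? All faculty numbers below are invented for illustration;
# the study found this correlation to be poor.
from math import sqrt

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def clinical_full_time(faculty, threshold=0.60):
    """FPSC-style inclusion rule: keep faculty at >= 0.60 clinical FTE."""
    return [f for f in faculty if f["cfte"] >= threshold]

# (cFTE, annual wRVU) pairs -- invented, and only loosely related.
faculty = [{"cfte": c, "wrvu": w} for c, w in
           [(0.4, 2100), (0.6, 1800), (0.7, 4200),
            (0.8, 2500), (0.9, 5200), (1.0, 3000)]]
kept = clinical_full_time(faculty)
r = pearson_r([f["cfte"] for f in kept], [f["wrvu"] for f in kept])
print(f"{len(kept)} clinical full-time faculty, r = {r:.2f}")
```

With data like these, r lands well below 1, which is the shape of the study's finding: effort allocation alone is a weak predictor of wRVU output.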

    Use of Pathology Data to Improve High-Value Treatment of Cervical Neoplasia

    We investigated the influence of pathology data on patient outcomes in the treatment of high-grade cervical neoplasia in a joint pathology and gynecology collaboration. Two of us (B.S.D. and M.D.) reviewed all cytology, colposcopy, and surgical pathology results, patient history, and pregnancy outcomes for all patients with loop electrosurgical excision procedure specimens over a 33-month period (January 2011-September 2013). We used these data to determine compliance with the 2006 consensus guidelines for the performance of loop electrosurgical excision procedure and shared this information in 2 interprofessional and interdisciplinary educational interventions with Obstetrics/Gynecology and Pathology faculty at the end of September 2013. We simultaneously emphasized the new 2013 guidelines. During the postintervention period, we continued to provide follow-up using the parameters previously collected. Our postintervention data include 90 cases from a 27-month period (October 2013-December 2015). Our preintervention data include 331 cases in 33 months (average 10.0 per month) with 76% adherence to guidelines. Postintervention, there were 90 cases in 27 months (average 3.4 per month) and 96% adherence to the 2013 (more conservative) guidelines (P < .0001, χ² test). Preintervention, the rate of high-grade squamous intraepithelial lesion in loop electrosurgical excision procedure specimens was 44%, whereas postintervention the rate was 60% (P < .0087, 2-tailed Fisher exact test). The interval between diagnosis of low-grade squamous intraepithelial lesion and loop electrosurgical excision procedure also increased significantly, from a median 25.5 months preintervention to 54 months postintervention (P < .0073, Wilcoxon/Kruskal-Wallis test). Postintervention, there was a marked decrease in loop electrosurgical excision procedure cases as well as better patient outcomes. We infer that improved patient safety and higher value can be achieved by providing performance-based pathologic data.
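The headline adherence comparison can be re-checked with a 2x2 chi-square test. The counts below are reconstructed approximately from the abstract's rounded percentages (76% of 331 pre, 96% of 90 post), not the exact study data, and the test here is the plain Pearson statistic without continuity correction.

```python
# Rough re-check of the reported adherence gain, using counts
# reconstructed from the abstract's rounded percentages (approximate,
# not the study's exact data).
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided p), df = 1."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(stat / 2))  # survival function of chi-square with df = 1
    return stat, p

pre_adherent = round(0.76 * 331)    # ~252 of 331 pre-intervention cases
post_adherent = round(0.96 * 90)    # ~86 of 90 post-intervention cases
stat, p = chi2_2x2(pre_adherent, 331 - pre_adherent,
                   post_adherent, 90 - post_adherent)
print(f"chi2 = {stat:.1f}, p = {p:.1e}")
```

Even with the rounding, the statistic comes out large and the p-value well under .0001, consistent with the significance the abstract reports.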

    Benchmarking Subspecialty Practice in Academic Anatomic Pathology

    Assessment of physician workloads has become increasingly important in modern academic physician practice, where it is commonly used to allocate resources among departments, to determine staffing, and to set the compensation of individual physicians. The physician work relative value unit system is a frequently used metric in this regard. However, the application of this system to the practice of pathology has proven problematic. One area of uncertainty is the validity of using work relative value unit norms that were derived from general surgical pathology practice to assess the various subspecialties within anatomic pathology. Here, we used data from the 2017 Association of Pathology Chairs practice survey to assess salary and work relative value unit data for single-subspecialty practitioners in US academic pathology departments in the prior year (2016). Five subspecialties were evaluated: dermatopathology, gastrointestinal pathology, hematopathology/hematology, renal pathology, and neuropathology. Data for general surgical pathologists and cytopathologists were included for comparison. For this analysis, survey data were available for 168 practitioners in 43 US academic departments of pathology. Salary ranges varied little among subspecialties, with the exception of dermatopathology, where salaries were higher. In contrast, work relative value unit productivity varied widely among different subspecialties, with median values differing as much as 4- to 7-fold between subspecialties. These results suggest that the use of a single overall work relative value unit standard is not appropriate for specialty- or subspecialty-based anatomic pathology practice, and that either the benchmark norms should be tailored to individual practice patterns, or an alternative system of workload measurement should be developed.
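The paper's conclusion, benchmarking against subspecialty-specific norms rather than one overall standard, can be sketched as a simple lookup. The median values below are invented placeholders, not the survey's figures; only the 4- to 7-fold spread between subspecialties is taken from the abstract.

```python
# Hedged sketch: index each pathologist's output against the median for
# their own subspecialty instead of a single department-wide wRVU norm.
# Median values are invented placeholders, not the survey's figures.
SUBSPECIALTY_MEDIAN_WRVU = {
    "dermatopathology": 6000,   # hypothetical norms illustrating a
    "gastrointestinal": 5000,   # several-fold spread between
    "renal": 1500,              # subspecialties, as the survey found
    "neuropathology": 1200,
}

def productivity_index(subspecialty, annual_wrvu):
    """Annual wRVUs as a fraction of the practitioner's own
    subspecialty median (1.0 = exactly at that norm)."""
    return annual_wrvu / SUBSPECIALTY_MEDIAN_WRVU[subspecialty]

# A renal pathologist at 1500 wRVUs sits exactly at the renal norm,
# but would look far below par against a GI-derived benchmark.
at_own_norm = productivity_index("renal", 1500)
vs_gi_norm = 1500 / SUBSPECIALTY_MEDIAN_WRVU["gastrointestinal"]
print(at_own_norm, vs_gi_norm)
```

The same annual output thus reads as fully productive or badly underperforming depending solely on which norm is applied, which is the distortion the authors argue against.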

    Quality Improvement Intervention for Reduction of Redundant Testing

    Laboratory data are critical to analyzing and improving clinical quality. In the setting of residual use of creatine kinase M and B isoenzyme testing for myocardial infarction, we assessed disease outcomes of discordant creatine kinase M and B isoenzyme (+)/troponin I (−) test pairs in order to address anticipated clinician concerns about potential loss of case-finding sensitivity following proposed discontinuation of routine creatine kinase and creatine kinase M and B isoenzyme testing. Time-sequenced interventions were introduced. The main outcome was the percentage of cardiac marker studies performed within guidelines. Nonguideline orders dominated at baseline. Creatine kinase M and B isoenzyme testing in 7496 order sets failed to detect additional myocardial infarctions but was associated with 42 potentially preventable admissions per quarter. Interruptive computerized soft stops improved guideline compliance from 32.3% to 58%, and compliance subsequently exceeded 80% (P < .001) with peer leadership that featured dashboard feedback about test order performance. This successful experience was recapitulated in interrupted time series within 2 additional services within facility 1 and then in 2 external hospitals (including a critical access facility). Improvements have been sustained postintervention. Laboratory cost savings at the academic facility were estimated to be ≥US$635 000 per year. National collaborative data indicated that facility 1 improved its order patterns from fourth to first quartile compared to peer norms and imply that nonguideline orders persist elsewhere. This example illustrates how pathologists can provide leadership in assisting clinicians in changing laboratory ordering practices. We found that clinicians respond to local laboratory data about their own test performance and that evidence suggesting harm is more compelling to clinicians than evidence of cost savings. Our experience indicates that interventions done at an academic facility can be readily instituted by private practitioners at external facilities. The intervention data also supplement existing literature indicating that electronic order interruptions are more successful when combined with peer education and dashboard feedback about laboratory order performance. The findings may have implications for the role of the pathology laboratory in the ongoing pivot from quantity-based to value-based health care.
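The interruptive soft stop described above can be sketched as a simple order-review rule. This is an illustrative mock-up, not the sites' actual computerized provider order entry logic: the function, marker names, and advisory text are all invented; only the clinical premise, that troponin I alone suffices and CK/CK-MB adds no case-finding sensitivity, comes from the abstract.

```python
# Illustrative sketch (not the actual CPOE rule from the study): a soft
# stop that flags CK or CK-MB cardiac-marker orders as non-guideline,
# since troponin I alone provides the case-finding sensitivity needed.
GUIDELINE_MARKERS = {"troponin_i"}
LEGACY_MARKERS = {"ck", "ck_mb"}  # hypothetical order codes

def review_cardiac_order(tests):
    """Return (within_guidelines, advisory) for a set of ordered tests."""
    ordered = {t.lower() for t in tests}
    legacy = ordered & LEGACY_MARKERS
    if legacy:
        return False, (f"Soft stop: {', '.join(sorted(legacy))} adds no "
                       "case-finding sensitivity over troponin I; the "
                       "guideline cardiac-marker order is troponin I alone.")
    return True, "Order is within cardiac-marker guidelines."

ok, msg = review_cardiac_order(["Troponin_I", "CK_MB"])
print(ok, "-", msg)
```

A soft stop of this kind interrupts but does not block: the clinician sees the advisory and can still proceed, which matches the study's finding that interruptions work best when paired with peer education and feedback rather than hard prohibitions.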